An Example of an Improvable Rao–Blackwell Improvement, Inefficient Maximum Likelihood Estimator, and Unbiased Generalized Bayes Estimator

Authors

  • Tal Galili
  • Isaac Meilijson
Abstract

The Rao–Blackwell theorem offers a procedure for converting a crude unbiased estimator of a parameter θ into a "better" one, in fact unique and optimal if the improvement is based on a minimal sufficient statistic that is complete. In contrast, behind every minimal sufficient statistic that is not complete, there is an improvable Rao–Blackwell improvement. This is illustrated via a simple example based on the uniform distribution, in which a rather natural Rao–Blackwell improvement is uniformly improvable. Furthermore, in this example the maximum likelihood estimator is inefficient, and an unbiased generalized Bayes estimator performs exceptionally well. Counterexamples of this sort can be useful didactic tools for explaining the true nature of a methodology and possible consequences when some of the assumptions are violated. [Received December 2014. Revised September 2015.]
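The following Monte Carlo sketch makes the comparison concrete for the Uniform(θ, 2θ) model, a standard setting in which the minimal sufficient statistic (the sample minimum and maximum) is not complete; the three estimators below are illustrative, and the paper's exact construction, in particular its unbiased generalized Bayes estimator, is not reproduced here.

```python
# Illustrative sketch (not the paper's code): in the Uniform(theta, 2*theta)
# model, compare a crude unbiased estimator, its Rao-Blackwell improvement
# based on the (incomplete) minimal sufficient statistic (min, max), and the MLE.
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 1.0, 10, 200_000

x = rng.uniform(theta, 2 * theta, size=(reps, n))
x1, xmin, xmax = x[:, 0], x.min(axis=1), x.max(axis=1)

crude = 2 * x1 / 3                 # unbiased, since E[X_1] = 3*theta/2
rao_blackwell = (xmin + xmax) / 3  # E[crude | (min, max)] = (min + max) / 3
mle = xmax / 2                     # likelihood theta^(-n) on [max/2, min], decreasing in theta

for name, est in [("crude unbiased", crude),
                  ("Rao-Blackwell", rao_blackwell),
                  ("MLE", mle)]:
    print(f"{name:>15}: bias = {est.mean() - theta:+.4f}, MSE = {np.mean((est - theta) ** 2):.5f}")
```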

Similar articles

Classic and Bayes Shrinkage Estimation in Rayleigh Distribution Using a Point Guess Based on Censored Data

Introduction: In classical methods of statistics, the parameter of interest is estimated based on a random sample using natural estimators such as maximum likelihood or unbiased estimators (sample information). In practice, the researcher has prior information about the parameter in the form of a point guess value. Information in the guess value is called nonsample information. Thomp...

Bayesian estimation and prediction with multiply Type-II censored sample of sequential order statistics from one- and two-parameter exponential distribution

In this article we introduce sequential order statistics. Then, based on a multiply Type-II censored sample of sequential order statistics, Bayesian estimators are derived for the parameters of one- and two-parameter exponential distributions under the assumption that the prior distribution is given by an inverse gamma distribution and the Bayes estimator with respect to squared error loss ...

Estimating a Bounded Normal Mean Relative to Squared Error Loss Function

Let X1, ..., Xn be a random sample from a normal distribution with unknown mean θ and known variance σ². The usual estimator of the mean, i.e., the sample mean, is the maximum likelihood estimator, which under the squared error loss function is a minimax and admissible estimator. In many practical situations, θ is known in advance to lie in an interval, say [−m, m], for some m > 0. In this case, the maximum likelihood estimator...
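As a quick illustration of the restricted setting (not taken from the article above), the maximum likelihood estimator under the constraint θ ∈ [−m, m] is the projection of the sample mean onto that interval, and a short simulation shows the resulting reduction in mean squared error; the constants m, σ, n, and θ below are arbitrary choices.

```python
# Illustrative sketch: the restricted MLE for a bounded normal mean is the
# projection of the sample mean onto [-m, m]; compare its MSE with the
# unrestricted sample mean.  All constants are arbitrary illustration values.
import numpy as np

rng = np.random.default_rng(1)
m, sigma, n, theta, reps = 1.0, 1.0, 5, 0.8, 200_000

xbar = rng.normal(theta, sigma / np.sqrt(n), size=reps)  # sampling distribution of the sample mean
restricted = np.clip(xbar, -m, m)                        # restricted (projected) MLE

print("MSE of unrestricted MLE:", np.mean((xbar - theta) ** 2))
print("MSE of restricted MLE:  ", np.mean((restricted - theta) ** 2))
```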

Estimating a Bounded Normal Mean Under the LINEX Loss Function

Let X be a random variable from a normal distribution with unknown mean θ and known variance σ². In many practical situations, θ is known in advance to lie in an interval, say [−m, m], for some m > 0. As the usual estimator of θ, i.e., X, is inadmissible under the LINEX loss function, finding competitors for X becomes worthwhile. The only study in the literature considered the problem of min...
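For illustration only (again, not from the article above): writing the LINEX loss as L(Δ) = exp(aΔ) − aΔ − 1 with Δ the estimation error, the shifted estimator X − aσ²/2 is a standard competitor to X, and the sketch below compares simulated LINEX risks with and without projection onto [−m, m]; the values of a, m, σ, and θ are arbitrary.

```python
# Illustrative sketch: simulated LINEX risk of the usual estimator X, the
# shifted estimator X - a*sigma^2/2, and their projections onto [-m, m].
# All constants are arbitrary illustration values.
import numpy as np

def linex_loss(estimate, theta, a=1.0):
    d = estimate - theta
    return np.exp(a * d) - a * d - 1.0

rng = np.random.default_rng(2)
a, m, sigma, theta, reps = 1.0, 1.0, 1.0, 0.5, 200_000

x = rng.normal(theta, sigma, size=reps)
candidates = {
    "X": x,
    "X - a*sigma^2/2": x - a * sigma**2 / 2,
    "clip(X, -m, m)": np.clip(x, -m, m),
    "clip(X - a*sigma^2/2, -m, m)": np.clip(x - a * sigma**2 / 2, -m, m),
}
for name, est in candidates.items():
    print(f"{name:>28}: LINEX risk = {linex_loss(est, theta, a).mean():.4f}")
```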

Inference on Pr(X > Y) Based on Record Values From the Power Hazard Rate Distribution

In this article, we consider the problem of estimating the stress-strength reliability $Pr (X > Y)$ based on upper record values when $X$ and $Y$ are two independent but not identically distributed random variables from the power hazard rate distribution with common scale parameter $k$. When the parameter $k$ is known, the maximum likelihood estimator (MLE), the approximate Bayes estimator and ...


Journal:

Volume 70, Issue -

Pages -

Publication date: 2016